Your Browser Just Got Smarter — and More Dangerous
OpenAI recently released a new “browser” powered by artificial intelligence (AI).
Simon Willison, a well-known developer, wrote a detailed post explaining how OpenAI is trying to keep this new tool secure. He reviewed the company’s security statement point by point. If you’re interested in the deep technical details, his post is worth reading.
Here’s the main takeaway in simpler terms: OpenAI seems to be taking reasonable steps to handle the security risks that come with combining a browser (which lets you explore the web) and a large language model (LLM, like ChatGPT). But even OpenAI’s Chief Information Security Officer admits that one big problem — called “prompt injection” — still doesn’t have a real solution.
In other words, while OpenAI is trying to make this safe, we’re still in experimental territory. Using an AI browser today means taking part in a global test on how secure this new kind of tool can be. So it might be wise to wait before assuming all their safety measures are foolproof.
(Side note: Putting people on the “frontier” of security for everyday browsing seems risky, doesn’t it? Developer Tom MacWright has even argued that putting an AI chatbot between people and the internet is bound to cause serious problems eventually.)
Where Two Risky Worlds Meet: AI Browsers and Software Supply Chains
After reading Simon’s article, what really stands out is how two hot topics — AI browsers and npm supply chain attacks — could potentially mix in dangerous ways. To explain:
- npm is a system developers use to share and reuse small pieces of software (called “packages”).
- Supply chain attacks happen when hackers sneak something malicious into these shared packages, which then spread to many people or companies who use them.
Now imagine combining that with AI browsers, which can read and act on natural language (plain English text). That combination could open up new ways for attackers to exploit systems.
A Simple Example of How This Could Happen
Let’s say you’re a hacker. You upload a seemingly harmless software package that includes some normal English text — not computer code, just instructions written in plain language. No malicious code runs when people install your package. Everything looks clean.
Later, this package gets bundled (combined) with other software and eventually delivered to end users — ordinary people using AI browsers. Then something unexpected happens: The AI part of the browser reads those “plain English” instructions and follows them — doing something harmful that you, the attacker, secretly intended.
Notice what happened here:
- No malicious computer code ever ran.
- The browser itself wasn’t hacked.
- The attack happened because the AI understood and acted on human language that looked innocent but was actually designed to manipulate it.
So in this new world, even plain text can be used as a weapon. The space for possible attacks just got much, much bigger.
I’m not a security expert, but reading about this makes me think more and more about how critical cybersecurity has become. It feels like the one area of technology that keeps getting more complicated — and more dangerous — every year.